26.3 Personalized Medicine
the process of selecting a minimal set of markers that can capture most of the signal
from the untyped markers in a genome-wide association study. The central dogma
can be summarized by the argument that "if a marker is in tight LD with a
polymorphism that directly impacts disease risk …then one would be able to detect an
association between the marker and the disease with sample size that was increased
by a [certain] factor …over that needed to detect the effect of the functional variant
directly". These authors go on to decisively refute the central dogma (of GWAS).
A few years earlier, Pritchard and Cox (2002) had already written that “LD-based
methods work best when there is a single susceptibility allele at any given disease
locus, and generally perform very poorly if there is substantial allelic heterogeneity”.
Despite this, Manolio et al. (2008) were euphoric about the international HapMap
project's "success", although, given their affiliation with a major funding agency for
the project, their viewpoint may lack objectivity (the project was certainly successful
at spending large sums of public money). More pertinent are remarks such as "genetic
variation in chromosome …did not improve on the discrimination or classification
of predicted risk” (Paynter et al. 2009) or “treatment based on genetic testing offers
no benefit compared to ... without testing" (Eckman et al. 2009). In his paper
"Considerations for genomewide association studies in Parkinson disease" (PD),
Myers (2006) remarks that "Taken together, these four studies appear to provide
substantial evidence that none of the SNPs originally featured as potential PD loci are
convincingly replicated and that all may be false positives". It would appear that there is
a great deal of evidence against the “common variants/common disease” (CV/CD)
hypothesis—yet that does not prevent larger and larger studies (currently more than
one million markers) being attempted. Weiss and Terwilliger (2000) were already
sceptical over 20 years ago, and their scepticism has been amply vindicated.
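The "[certain] factor" in the quoted central dogma can be made concrete: under the usual single-causal-variant simplification, the sample size needed to detect an association through a tag marker grows roughly as the reciprocal of the squared LD correlation r² between the marker and the functional variant. A minimal sketch (the r² values below are invented purely for illustration):

```python
# Illustrative only: the approximate sample-size penalty for testing a
# causal variant indirectly through a marker in LD with it. Under the
# standard single-causal-variant model, power is preserved if the sample
# size grows by roughly 1 / r2, where r2 is the squared LD correlation.

def sample_size_inflation(r2: float) -> float:
    """Approximate factor by which the sample size must increase when the
    causal variant is tested through a marker with LD correlation r2."""
    if not 0.0 < r2 <= 1.0:
        raise ValueError("r2 must lie in (0, 1]")
    return 1.0 / r2

# A perfect proxy (r2 = 1) needs no extra samples; a weak one (r2 = 0.2)
# needs five times as many.
factors = {r2: sample_size_inflation(r2) for r2 in (1.0, 0.5, 0.2)}
```

This simple reciprocal relationship also makes Pritchard and Cox's caveat quantitative: with substantial allelic heterogeneity no single marker attains a high r² with "the" causal variant, so the required inflation factor becomes prohibitively large.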
Precision gene editing (targeted genomic sequence changes) has long been a
dream of physicians wishing to cure genetic diseases. It was achieved quite
efficiently in prokaryotes early on: one success was the incorporation of the human
insulin gene into E. coli, which was then cultured in order to produce the hormone
for controlling diabetes. Nowadays, the active ingredients for perhaps as many as
two hundred medicaments are produced in this biotechnological fashion.
The precision editing of human genomes remained a difficult challenge, however,
until the invention of the CRISPR-Cas (clustered regularly interspaced short
palindromic repeats/CRISPR-associated) technique.23 It enables much faster and more
efficient editing of the human genome compared with conventional techniques.24
CRISPR-Cas works by using an enzyme called CRISPR-associated protein (Cas)
to recognize and cut specific strands of DNA. Guided by an RNA transcribed from
the CRISPR region, the Cas protein binds to a DNA sequence complementary to that
guide and then cuts both strands of the DNA. This cutting action is often referred to as
23 Sander and Joung (2014).
24 Conventional gene editing uses homologous recombination (HR). Plants are typically genetically
modified (introduction of new genes, deletion of existing genes, alteration of existing genes) by
transgenesis, which involves the introduction of a foreign gene into the plant’s genome using gene
guns, bacterial vectors or viruses.